
    Review on the methods of automatic liver segmentation from abdominal images

    Automatic liver segmentation from abdominal images is challenging in terms of segmentation accuracy, automation, and robustness. There exist many methods of liver segmentation and many ways of categorising them. In this paper, we present a new way of summarising the latest achievements in automatic liver segmentation. We categorise a segmentation method according to the image feature it works on, thereby better summarising the performance of each category and helping to find an optimal solution for a particular segmentation task. All methods of liver segmentation are categorised into three main classes: gray-level-based methods, structure-based methods, and texture-based methods. In each class, the latest advances are reviewed, with summary comments on the advantages and drawbacks of each discussed approach. Performance comparisons among the classes are given, along with remarks on existing problems and possible solutions. In conclusion, we point out that liver segmentation is still an open issue, and the tendency is that multiple methods will be employed together to achieve better segmentation performance.

    Fusing Text and Image for Event Detection in Twitter

    In this contribution, we develop an accurate and effective event detection method that detects events from a Twitter stream, using visual and textual information to improve the performance of the mining process. The method monitors a Twitter stream to pick up tweets having both text and images and stores them in a database. A mining algorithm is then applied to detect an event. The procedure starts by detecting events based on text only, using bag-of-words features weighted by the term frequency-inverse document frequency (TF-IDF) method. It then detects events based on images only, using visual features including histogram of oriented gradients (HOG) descriptors, the grey-level co-occurrence matrix (GLCM), and the color histogram. K-nearest-neighbours (KNN) classification is used in the detection. The final decision of the event detection is made based on the reliabilities of text-only detection and image-only detection. The experimental results showed that the proposed method achieved a high accuracy of 0.94, compared with 0.89 with text only and 0.86 with images only. Comment: 9 pages, 4 figures
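    The text-only branch (TF-IDF bag-of-words scored with a KNN classifier) can be sketched in plain Python. The function names and the tiny tweet corpus in the test are illustrative, not taken from the paper:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF weights for a list of tokenised documents (as sparse dicts)."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))  # document frequency
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse TF-IDF vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def knn_predict(train_vecs, labels, query_vec, k=3):
    """Majority vote among the k most similar training tweets."""
    ranked = sorted(range(len(train_vecs)),
                    key=lambda i: cosine(train_vecs[i], query_vec),
                    reverse=True)
    return Counter(labels[i] for i in ranked[:k]).most_common(1)[0][0]
```

    The image-only branch works the same way, with HOG/GLCM/histogram features replacing the TF-IDF dicts.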

    Multiple Kernel-Based Multimedia Fusion for Automated Event Detection from Tweets

    A method for detecting hot events such as wildfires is proposed. It uses visual and textual information to improve detection. Starting by picking up tweets containing both text and images, it preprocesses the data to eliminate unwanted data, transforms the unstructured data into structured data, and then extracts features. Text features include term frequency-inverse document frequency (TF-IDF). Image features include the histogram of oriented gradients (HOG), the gray-level co-occurrence matrix (GLCM), the color histogram, and the scale-invariant feature transform (SIFT). Next, the features are input to multiple kernel learning (MKL) for fusion, which automatically combines both feature types to achieve the best performance. Finally, event detection is performed. The method was tested on the 2014 Brisbane hailstorm and the 2017 California wildfires, and compared with methods that used text only or images only. With the Brisbane hailstorm data, the proposed method achieved the best performance, with a fusion accuracy of 0.93, compared to 0.89 with text only and 0.85 with images only. A similar performance was recorded with the California wildfires data. This demonstrates that event detection in Twitter is enhanced by combining multiple features, and it delivers an accurate and effective event detection method for spreading awareness and organising responses, leading to better disaster management.
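    The core idea of the MKL fusion step is a convex combination of one kernel per modality (text, HOG, GLCM, and so on). In real MKL the combination weights are learned jointly with the classifier; this minimal sketch fixes them by hand, and all names are hypothetical:

```python
import math

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian (RBF) kernel between two equal-length feature vectors."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq)

def fused_kernel(x_feats, y_feats, weights, gamma=1.0):
    """Convex combination of per-modality kernels: each sample is a list of
    feature vectors, one per modality, compared kernel-by-kernel."""
    assert abs(sum(weights) - 1.0) < 1e-9, "kernel weights must sum to 1"
    return sum(w * rbf_kernel(x, y, gamma)
               for w, (x, y) in zip(weights, zip(x_feats, y_feats)))
```

    Any kernel classifier (e.g. an SVM) can then operate on the fused Gram matrix instead of a single-modality one.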

    Liver Segmentation from CT Images Using a Modified Distance Regularized Level Set Model Based on a Novel Balloon Force

    Organ segmentation from medical images is still an open problem, and liver segmentation is a much more challenging task than other organ segmentations. This paper presents a liver segmentation method for a sequence of computed tomography images. We propose a novel balloon force that controls the direction of the evolution process and slows down the evolving contour in regions with weak or absent edges, discouraging the evolving contour from moving far away from the liver boundary or from leaking through a region that has a weak edge or no edge at all. The model is implemented using a modified Distance Regularized Level Set (DRLS) model. The experimental results show that the method can achieve satisfactory results. Compared with the original DRLS model, our model is more effective in dealing with over-segmentation problems.
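    The behaviour described for the balloon force can be illustrated with a minimal scalar sketch: an edge-indicator function attenuates the force near strong gradients, and an intensity test flips its sign outside liver-like tissue. The exact functional form, thresholds, and parameter names below are assumptions for illustration, not the paper's formulation:

```python
def edge_indicator(grad_mag, k=1.0):
    """g = 1 / (1 + (|grad I| / k)^2): near 0 on strong edges, near 1 in flat regions."""
    return 1.0 / (1.0 + (grad_mag / k) ** 2)

def balloon_force(grad_mag, liver_intensity, voxel_intensity,
                  alpha=1.0, k=1.0, tau=0.2):
    """Hypothetical balloon term: expand (+) while the voxel still looks like
    liver tissue, shrink (-) otherwise, and attenuate near edges so the
    contour slows down instead of leaking through weak boundaries."""
    g = edge_indicator(grad_mag, k)
    direction = 1.0 if abs(voxel_intensity - liver_intensity) < tau else -1.0
    return alpha * g * direction
```

    In the level set evolution this term is added to the DRLS energy gradient, scaled by the regularised Dirac delta of the level set function.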

    Sensor Data Fusion for Accurate Cloud Presence Prediction Using Dempster-Shafer Evidence Theory

    Sensor data fusion technology can be used to extract useful information from multiple sensor observations. It has been widely applied in applications such as target tracking, surveillance, robot navigation, and signal and image processing. This paper introduces a novel data fusion approach for a multiple-radiation-sensor environment using Dempster-Shafer evidence theory. The methodology is used to predict cloud presence based on the inputs of radiation sensors, and different radiation data have been used for the cloud prediction. Potential application areas of the algorithm include renewable power for virtual power stations, where the prediction of cloud presence is the most challenging issue for photovoltaic output. The algorithm is validated by comparing the predicted cloud presence with the corresponding sunshine occurrence data recorded as the benchmark. Our experiments have indicated that, compared to approaches using individual sensors, the proposed data fusion approach can increase the correct rate of cloud prediction by ten percent and decrease the unknown rate of cloud prediction by twenty-three percent.
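    The fusion machinery here is Dempster's rule of combination, which is standard; how the paper maps raw radiation readings to mass functions is not specified, so the masses in the example below are made up. A minimal implementation over a frame of discernment such as {cloud, clear}:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions given as dicts
    mapping frozenset hypotheses (subsets of the frame) to masses."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting sources cannot be combined")
    # renormalise by the non-conflicting mass
    return {h: v / (1.0 - conflict) for h, v in combined.items()}
```

    Combining two sensors that both lean towards "cloud" concentrates belief on "cloud" while shrinking the "unknown" mass assigned to the whole frame, which is exactly the reported effect on the unknown rate.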

    3D Face Reconstruction from Single 2D Image Using Distinctive Features

    3D face reconstruction is considered a useful computer vision tool, though it is difficult to build. This paper proposes a 3D face reconstruction method that is easy to implement and computationally efficient. It takes a single 2D image as input and gives a 3D reconstructed image as output. Our method consists of three main steps: feature extraction, depth calculation, and creation of a 3D image from the processed image using the Basel Face Model (BFM). First, the features of a single 2D image are extracted using a two-step process. Before distinctive-feature extraction, a face must be detected to confirm whether one is present in the input image; for this purpose, facial features like the eyes, nose, and mouth are extracted. Then, distinctive features are mined using the scale-invariant feature transform (SIFT), to be used for 3D face reconstruction at a later stage. The second step is depth calculation, which assigns the image a third dimension. A multivariate Gaussian distribution helps to find the third dimension, which is further tuned using shading cues obtained by the shape-from-shading (SFS) technique. Third, the data obtained from the two preceding steps are used to create a 3D image with the BFM. The proposed method does not rely on multiple images, which lightens the computational burden. Experiments were carried out on different 2D images to validate the proposed method, and its performance was compared to that of the latest approaches. The experimental results demonstrate that the proposed method is time-efficient and robust, and that it outperformed all of the tested methods in terms of detail recovery and accuracy.
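    The depth-calculation step can be sketched as Gaussian conditioning: given a trivariate Gaussian prior over landmark coordinates (x, y, z), the unknown depth is estimated as the conditional mean E[z | x, y]. This is only the conditioning identity, assuming a per-landmark prior; the paper's actual model (and its coupling with SFS refinement) is richer:

```python
def conditional_depth(mu, sigma, x, y):
    """E[z | x, y] for a trivariate Gaussian with mean mu = [mx, my, mz] and
    3x3 covariance sigma: mz + S_z,xy S_xy^-1 ([x, y] - [mx, my]).
    The prior (mu, sigma) would be learned from registered 3D face scans."""
    a, b, c, d = sigma[0][0], sigma[0][1], sigma[1][0], sigma[1][1]
    det = a * d - b * c  # 2x2 inverse done by hand
    inv = [[d / det, -b / det], [-c / det, a / det]]
    dx, dy = x - mu[0], y - mu[1]
    w0 = sigma[2][0] * inv[0][0] + sigma[2][1] * inv[1][0]
    w1 = sigma[2][0] * inv[0][1] + sigma[2][1] * inv[1][1]
    return mu[2] + w0 * dx + w1 * dy
```

    When depth is uncorrelated with the image-plane coordinates, the estimate falls back to the prior mean depth, as expected.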

    Visions, Values, and Videos: Revisiting Envisionings in Service of UbiComp Design for the Home

    UbiComp has been envisioned to bring about a future dominated by calm computing technologies that make our everyday lives ever more convenient. Yet the same vision has also attracted criticism for encouraging a solitary and passive lifestyle. The aim of this paper is to explore and elaborate these tensions further by examining the human values surrounding future domestic UbiComp solutions. Drawing on envisioning and contravisioning, we probe members of the public (N=28) through the presentation and focus-group discussion of two contrasting animated video scenarios, one inspired by "calm" and the other by "engaging" visions of future UbiComp technology. By analysing the reasoning of our participants, we identify and elaborate a number of relevant values involved in balancing the two perspectives. In conclusion, we articulate practically applicable takeaways in the form of a set of key design questions and challenges. Comment: DIS'20, July 6-10, 2020, Eindhoven, Netherlands

    Automated medical image segmentation using a new deformable surface model

    This paper introduces an automated medical image segmentation algorithm that can be used to locate volumetric objects, such as brain tumors, in Magnetic Resonance Imaging (MRI) images. The algorithm is novel in that it treats the MRI slices (or images) as a single three-dimensional (3D) object, and all segmentation processing is done in 3D space. First, it removes noisy voxels with 3D nonlinear anisotropic filtering, which preserves the continuity of the intensity distribution in all three directions while smoothing noisy voxels. Second, it uses a novel deformable surface model to segment an object from the MRI volume. A dynamic gradient vector flow is used in forming the surface model. Experiments have been carried out on segmenting tumors from real MRI data of human heads, and accurate 3D tumor segmentation has been achieved.
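    The edge-preserving denoising step can be illustrated with one iteration of 3D Perona-Malik-style anisotropic diffusion. The paper does not specify its diffusivity function, so the exponential form and the parameter values below are assumptions:

```python
import math

def anisotropic_step(vol, kappa=0.2, lam=0.1):
    """One 3D anisotropic diffusion step on a nested-list volume.
    The flux to each of the six axis neighbours is weighted by
    exp(-(dI/kappa)^2), so smoothing is suppressed across strong edges
    while small (noisy) intensity differences are averaged out."""
    nz, ny, nx = len(vol), len(vol[0]), len(vol[0][0])
    g = lambda d: math.exp(-(d / kappa) ** 2)  # diffusivity
    out = [[[0.0] * nx for _ in range(ny)] for _ in range(nz)]
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                flux = 0.0
                for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                   (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                    z2, y2, x2 = z + dz, y + dy, x + dx
                    if 0 <= z2 < nz and 0 <= y2 < ny and 0 <= x2 < nx:
                        d = vol[z2][y2][x2] - vol[z][y][x]
                        flux += g(d) * d
                out[z][y][x] = vol[z][y][x] + lam * flux
    return out
```

    Iterating this step a few times removes small intensity spikes while leaving large differences (tissue boundaries) nearly untouched, which is the continuity-preserving behaviour the paper relies on before surface evolution.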